Federated Learning (FL) is proving to be one of the most promising paradigms for leveraging distributed resources, enabling a set of clients to collaboratively train a machine learning model while keeping the data decentralized. The explosive growth of interest in the topic has led to rapid advancements in several core aspects, such as communication efficiency, handling non-IID data, privacy, and security capabilities. However, the majority of FL works only deal with supervised tasks, assuming that clients' training sets are labeled. To leverage the enormous amount of unlabeled data on distributed edge devices, we aim at extending the FL paradigm to unsupervised tasks by addressing the problem of anomaly detection in decentralized settings. In particular, we propose a novel method in which, through a preprocessing phase, clients are grouped into communities, each of them having a similar majority (i.e., inlier) pattern. Subsequently, each community of clients trains the same anomaly detection model (i.e., an autoencoder) in a federated fashion. The resulting model is then shared and used to detect anomalies within the clients of the same community that joined the corresponding federated process. Experiments show that our method is robust: it detects communities consistent with the ideal partitioning, in which the groups of clients having the same inlier pattern are known. Furthermore, the performance is significantly better than that achieved when each client trains a model exclusively on its local data, and comparable to that of federated models trained under the ideal community partitioning.
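A minimal sketch of the idea, assuming PyTorch and plain FedAvg as the federated aggregation rule: clients in one community train the same small autoencoder on their local, unlabeled data, the averaged model is shared back, and anomalies are flagged by reconstruction error. The network size, number of rounds, and the 3-sigma threshold are illustrative choices, not the paper's actual configuration.

```python
# Hedged sketch: federated training of a shared autoencoder within one client
# community, with anomalies flagged by reconstruction error. Names and
# hyperparameters (local_epochs, threshold rule, ...) are illustrative.
import copy
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, dim, hidden=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)
    def forward(self, x):
        return self.dec(self.enc(x))

def local_update(global_model, data, local_epochs=1, lr=1e-3):
    """One client's local training pass on its (unlabeled) inlier-dominated data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(local_epochs):
        opt.zero_grad()
        loss = loss_fn(model(data), data)
        loss.backward()
        opt.step()
    return model.state_dict()

def fed_avg(states):
    """Plain FedAvg: average the parameters of the clients in the same community."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(dim=0)
    return avg

# One community of clients sharing a similar inlier pattern (toy data).
torch.manual_seed(0)
community = [torch.randn(64, 10) for _ in range(3)]
global_model = AE(dim=10)
for rnd in range(5):                                  # federated rounds
    states = [local_update(global_model, d) for d in community]
    global_model.load_state_dict(fed_avg(states))

# Anomaly detection: flag samples whose reconstruction error exceeds a threshold.
with torch.no_grad():
    errors = ((global_model(community[0]) - community[0]) ** 2).mean(dim=1)
threshold = errors.mean() + 3 * errors.std()          # illustrative decision rule
print("anomalies flagged:", (errors > threshold).sum().item())
```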
Cloud auto-scaling mechanisms are typically based on reactive automation rules that scale a cluster whenever some metric, e.g., the average CPU usage among instances, exceeds a predefined threshold. Tuning these rules becomes particularly cumbersome when scaling up a cluster involves non-negligible times to bootstrap new instances, as often happens in production cloud services. To deal with this problem, we propose an architecture for auto-scaling cloud services based on the status in which the system is expected to evolve in the near future. Our approach leverages time-series forecasting techniques, such as those based on machine learning and artificial neural networks, to predict the future dynamics of key metrics, e.g., resource consumption metrics, and applies a threshold-based scaling policy on them. The result is a predictive automation policy that is able, for instance, to automatically anticipate peaks in the load of a cloud application and trigger appropriate scaling actions ahead of time to accommodate the expected increase in traffic. We prototype our approach as an open-source OpenStack component, which relies on, and extends, the monitoring capabilities offered by Monasca, adding predictive metrics that can be leveraged by orchestration components such as Heat or Senlin. We show experimental results using a recurrent neural network and a multi-layer perceptron as predictors, compared with a simple linear regression and a traditional non-predictive auto-scaling policy. The proposed framework, however, allows the prediction policy to be easily customized as needed.
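A rough sketch of the predictive policy under simplified assumptions: a predictor trained on sliding windows of a metric forecasts its value a few steps ahead, and the usual threshold rule is applied to the forecast instead of the current measurement. Linear regression stands in for the RNN/MLP predictors; the window size, horizon, and threshold are placeholders rather than the component's real configuration.

```python
# Hedged sketch of predictive auto-scaling: forecast the metric a few steps
# ahead from a sliding window of past samples, then apply a threshold rule to
# the *predicted* value. All constants below are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

def make_windows(series, window, horizon):
    """Build (window -> value `horizon` steps later) training pairs."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

# Synthetic CPU-usage trace with a periodic pattern plus noise.
rng = np.random.default_rng(0)
t = np.arange(500)
cpu = 50 + 30 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 3, t.size)

WINDOW, HORIZON, THRESHOLD = 20, 5, 75.0          # predict 5 steps ahead
X, y = make_windows(cpu, WINDOW, HORIZON)
predictor = LinearRegression().fit(X, y)          # an RNN/MLP could be swapped in

predicted = predictor.predict(cpu[-WINDOW:].reshape(1, -1))[0]
if predicted > THRESHOLD:
    print(f"predicted load {predicted:.1f}% > {THRESHOLD}%: scale out ahead of time")
else:
    print(f"predicted load {predicted:.1f}%: no scaling action")
```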
Being able to recommend links between users in online social networks is important for users to connect with like-minded individuals, as well as for the platforms themselves and third parties leveraging social media information to grow their business. Predictions are typically based on unsupervised or supervised learning, often leveraging simple yet effective graph topological information, such as the number of common neighbors. However, we argue that richer information about the personal social structure of individuals might lead to better predictions. In this paper, we propose to leverage well-established social cognitive theories to improve link prediction performance. According to these theories, individuals arrange their social relationships, on average, along five concentric circles of decreasing intimacy. We postulate that relationships in different circles have different importance in predicting new links. To validate this claim, we focus on popular feature-extraction prediction algorithms (both unsupervised and supervised) and extend them to include social-circle awareness. We validate the prediction performance of these circle-aware algorithms against several benchmarks (including their baseline versions as well as node-embedding- and GNN-based link prediction), leveraging two Twitter datasets comprising a community of video gamers and one of generic users. We show that social-circle awareness generally provides significant improvements in prediction performance, beating state-of-the-art solutions such as node2vec and SEAL without increasing the computational complexity. Finally, we show that circle awareness can be used in place of a classifier (which may be costly or impractical to obtain) for targeting specific categories of users.
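As an illustration of circle awareness, here is a hedged sketch of one way a common-neighbors feature could be made circle-aware: ties are binned into five Dunbar-like circles by contact frequency, and each common neighbor contributes a circle-dependent weight rather than a uniform count. Both the circle assignment and the weights are assumptions for illustration, not the paper's exact features.

```python
# Hedged sketch of a circle-aware variant of the common-neighbors score.
# Circle assignment and weights are illustrative only.
import networkx as nx

def circle_of(G, u, v, n_circles=5):
    """Toy circle assignment: rank u's neighbors by edge weight (e.g. contact
    frequency) and split the ranking into n_circles bins (1 = most intimate)."""
    nbrs = sorted(G[u], key=lambda w: G[u][w].get("weight", 1.0), reverse=True)
    rank = nbrs.index(v)
    return 1 + (rank * n_circles) // max(len(nbrs), 1)

def circle_aware_cn(G, u, v, weights=(1.0, 0.8, 0.6, 0.4, 0.2)):
    """Common neighbors of (u, v), each weighted by its circle w.r.t. u and v."""
    score = 0.0
    for z in set(G[u]) & set(G[v]):
        score += 0.5 * (weights[circle_of(G, u, z) - 1]
                        + weights[circle_of(G, v, z) - 1])
    return score

# Tiny weighted interaction graph (edge weight = contact frequency).
G = nx.Graph()
G.add_weighted_edges_from([("a", "c", 10), ("a", "d", 1), ("b", "c", 5),
                           ("b", "d", 2), ("c", "d", 1)])
print(circle_aware_cn(G, "a", "b"))   # candidate link a-b scored via c and d
```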
Computational units in artificial neural networks follow a simplified model of biological neurons. In the biological model, the output signal of a neuron runs down the axon, splits following the many branches at its end, and passes identically to all the downstream neurons of the network. Each of the downstream neurons will use its copy of this signal as one of many dendritic inputs, integrate them all, and fire an output if above some threshold. In the artificial neural network, this translates to the fact that the nonlinear filtering of the signal is performed in the upstream neuron, meaning that in practice the same activation is shared between all the downstream neurons that use that signal as their input. Dendrites thus play a passive role. We propose a slightly more complex model for the biological neuron, where dendrites play an active role: the activation at the output of the upstream neuron becomes optional, and instead the signals going through each dendrite undergo independent nonlinear filterings before the linear combination. We implement this new model into a ReLU computational unit and discuss its biological plausibility. We compare this new computational unit with the standard one and describe it from a geometrical point of view. We provide a Keras implementation of this unit into fully connected and convolutional layers and estimate their FLOPs and weights change. We then use these layers in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, obtaining performance improvements over standard ResNets of up to 1.73%. Finally, we prove a universal representation theorem for continuous functions on compact sets and show that this new unit has more representational power than its standard counterpart.
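A toy contrast between the standard unit and one plausible reading of the proposed unit, where each incoming connection applies its own ReLU before the linear combination. This is an illustrative PyTorch sketch, not the authors' Keras layers, and the exact parameterization (per-dendrite biases) is an assumption.

```python
# Hedged sketch: standard ReLU unit vs. an "active dendrite" unit in which each
# dendrite filters its signal independently before the sum. Illustration only.
import torch
import torch.nn as nn

class StandardLayer(nn.Module):
    """y = ReLU(W x + b): one shared activation in the upstream neuron."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, x):
        return torch.relu(self.lin(x))

class DendriticLayer(nn.Module):
    """y_j = sum_i ReLU(w_ij * x_i + b_ij): per-dendrite nonlinear filtering."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.B = nn.Parameter(torch.zeros(d_out, d_in))
    def forward(self, x):                          # x: (batch, d_in)
        pre = x.unsqueeze(1) * self.W + self.B     # (batch, d_out, d_in)
        return torch.relu(pre).sum(dim=-1)         # combine filtered dendrites

x = torch.randn(4, 16)
print(StandardLayer(16, 8)(x).shape, DendriticLayer(16, 8)(x).shape)
```

Note that, under this reading, the per-dendrite biases roughly double the weight count of the layer, consistent with the abstract's mention of estimating the change in FLOPs and weights.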
Humans have internal models of robots (like their physical capabilities), the world (like what will happen next), and their tasks (like a preferred goal). However, human internal models are not always perfect: for example, it is easy to underestimate a robot's inertia. Nevertheless, these models change and improve over time as humans gather more experience. Interestingly, robot actions influence what this experience is, and therefore influence how people's internal models change. In this work we take a step towards enabling robots to understand the influence they have, leverage it to better assist people, and help human models more quickly align with reality. Our key idea is to model the human's learning as a nonlinear dynamical system which evolves the human's internal model given new observations. We formulate a novel optimization problem to infer the human's learning dynamics from demonstrations that naturally exhibit human learning. We then formalize how robots can influence human learning by embedding the human's learning dynamics model into the robot planning problem. Although our formulations provide concrete problem statements, they are intractable to solve in full generality. We contribute an approximation that sacrifices the complexity of the human internal models we can represent, but enables robots to learn the nonlinear dynamics of these internal models. We evaluate our inference and planning methods in a suite of simulated environments and an in-person user study, where a 7DOF robotic arm teaches participants to be better teleoperators. While influencing human learning remains an open problem, our results demonstrate that this influence is possible and can be helpful in real human-robot interaction.
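A toy illustration of the core formulation, with heavy simplifications: the human's internal model is a single scalar belief about the robot's inertia, its learning dynamics are a hand-written update rule rather than dynamics inferred from demonstrations, and the robot "plans" by forward-simulating each candidate action and picking the one that best aligns the belief with reality.

```python
# Hedged toy example: treat the human's internal model as a state evolving under
# a learning rule, and embed a forward simulation of that rule in action choice.
import numpy as np

TRUE_INERTIA = 2.0                     # ground-truth property of the robot

def human_learning_step(belief, action, rate=0.3):
    """Toy learning dynamics: more forceful demonstrations make the mismatch
    between the human's belief and reality more salient, so the belief moves
    faster toward the truth."""
    return belief + rate * action * np.tanh(TRUE_INERTIA - belief)

def simulate(belief, action, horizon=5):
    """Forward-simulate how the human's internal model would evolve if the
    robot repeatedly demonstrated `action`."""
    for _ in range(horizon):
        belief = human_learning_step(belief, action)
    return belief

belief0 = 0.5                                    # the human underestimates inertia
candidates = np.linspace(0.1, 1.0, 10)           # candidate demonstration magnitudes
best = min(candidates, key=lambda a: abs(simulate(belief0, a) - TRUE_INERTIA))
print(f"chosen demonstration {best:.2f}, resulting belief {simulate(belief0, best):.2f}")
```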
Explainability is a vibrant research topic in the artificial intelligence community, with growing interest across methods and domains. Much has been written about the topic, yet explainability still lacks shared terminology and a framework capable of providing structural soundness to explanations. In our work, we address these issues by proposing a novel definition of explanation that is a synthesis of what can be found in the literature. We recognize that explanations are not atomic but rather the product of (i) evidence stemming from the model and its input-output behavior and (ii) the human interpretation of this evidence. Furthermore, we fit explanations into the properties of faithfulness (i.e., the explanation being a true description of the model's decision-making) and plausibility (i.e., how convincing the explanation looks to the user). Using our proposed theoretical framework simplifies how these properties are operationalized, and it provides new insight into common explanation methods, which we analyze as case studies.
Fruit is a key crop in worldwide agriculture, feeding millions of people. The standard supply chain of fruit products involves quality checks to guarantee freshness, taste, and, most of all, safety. An important factor that determines fruit quality is its stage of ripeness. This is usually classified manually by experts in the field, which makes it a labor-intensive and error-prone process. Thus, there is a rising need for automation in the process of fruit ripeness classification. Many automatic methods have been proposed that employ a variety of feature descriptors for the food item to be graded. Machine learning and deep learning techniques dominate the top-performing methods. Furthermore, deep learning can operate on raw data and thus relieve users from having to compute complex engineered features, which are often crop-specific. In this survey, we review the latest methods proposed in the literature to automate fruit ripeness classification, highlighting the most common feature descriptors they operate on.
Graph Neural Networks (GNNs) achieve state-of-the-art performance on graph-structured data across numerous domains. Their underlying ability to represent nodes as summaries of their vicinities has proven effective for homophilous graphs in particular, in which same-type nodes tend to connect. On heterophilous graphs, in which different-type nodes are likely connected, GNNs perform less consistently, as neighborhood information might be less representative or even misleading. On the other hand, GNN performance is not inferior on all heterophilous graphs, and there is a lack of understanding of what other graph properties affect GNN performance. In this work, we highlight the limitations of the widely used homophily ratio and the recent Cross-Class Neighborhood Similarity (CCNS) metric in estimating GNN performance. To overcome these limitations, we introduce 2-hop Neighbor Class Similarity (2NCS), a new quantitative graph structural property that correlates with GNN performance more strongly and consistently than alternative metrics. 2NCS considers two-hop neighborhoods as a theoretically derived consequence of the two-step label propagation process governing GCN's training-inference process. Experiments on one synthetic and eight real-world graph datasets confirm consistent improvements over existing metrics in estimating the accuracy of GCN- and GAT-based architectures on the node classification task.
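A plausible sketch of a two-hop neighbor class similarity computation in the spirit described above; the paper's exact definition of 2NCS may differ in normalization and in how the two hops are combined, so treat this as an illustration only.

```python
# Hedged sketch: compare, across node pairs of two classes, the class histograms
# of their two-hop neighborhoods via cosine similarity. Illustrative definition.
import numpy as np
import networkx as nx

def class_histogram(G, node, labels, n_classes, hops=2):
    """Class distribution over the nodes exactly `hops` steps away from `node`."""
    lengths = nx.single_source_shortest_path_length(G, node, cutoff=hops)
    hist = np.zeros(n_classes)
    for v, d in lengths.items():
        if d == hops:
            hist[labels[v]] += 1
    return hist / hist.sum() if hist.sum() else hist

def two_hop_class_similarity(G, labels, n_classes, c1, c2):
    """Average cosine similarity of 2-hop class histograms between classes c1, c2."""
    nodes1 = [v for v in G if labels[v] == c1]
    nodes2 = [v for v in G if labels[v] == c2]
    sims = []
    for u in nodes1:
        hu = class_histogram(G, u, labels, n_classes)
        for v in nodes2:
            hv = class_histogram(G, v, labels, n_classes)
            denom = np.linalg.norm(hu) * np.linalg.norm(hv)
            sims.append(hu @ hv / denom if denom else 0.0)
    return float(np.mean(sims))

# Toy run on a standard labeled graph (two classes from the club attribute).
G = nx.karate_club_graph()
labels = {v: int(G.nodes[v]["club"] == "Officer") for v in G}
print(two_hop_class_similarity(G, labels, 2, 0, 1))
```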
In recent years, reinforcement learning (RL) has become increasingly successful in its application to science and the process of scientific discovery in general. However, while RL algorithms learn to solve increasingly complex problems, interpreting the solutions they provide becomes ever more challenging. In this work, we gain insights into an RL agent's learned behavior through a post-hoc analysis based on sequence mining and clustering. Specifically, frequent and compact subroutines, used by the agent to solve a given task, are distilled as gadgets and then grouped by various metrics. This process of gadget discovery develops in three stages: First, we use an RL agent to generate data, then, we employ a mining algorithm to extract gadgets and finally, the obtained gadgets are grouped by a density-based clustering algorithm. We demonstrate our method by applying it to two quantum-inspired RL environments. First, we consider simulated quantum optics experiments for the design of high-dimensional multipartite entangled states where the algorithm finds gadgets that correspond to modern interferometer setups. Second, we consider a circuit-based quantum computing environment where the algorithm discovers various gadgets for quantum information processing, such as quantum teleportation. This approach for analyzing the policy of a learned agent is agent and environment agnostic and can yield interesting insights into any agent's policy.
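A hedged sketch of the three-stage pipeline on toy data: hard-coded action sequences stand in for episodes generated by a trained agent, frequent compact subsequences are mined with simple n-gram counting rather than a dedicated sequence miner, and the resulting gadgets are grouped with DBSCAN on a bag-of-actions feature. All names and thresholds are illustrative.

```python
# Hedged sketch of gadget discovery: (1) agent episodes, (2) frequent compact
# subsequences ("gadgets") via n-gram counting, (3) density-based clustering.
from collections import Counter
import numpy as np
from sklearn.cluster import DBSCAN

# Stage 1: episodes from an RL agent (here, hard-coded toy action sequences).
episodes = [
    ["H", "CNOT", "MEASURE", "H", "CNOT", "X"],
    ["H", "CNOT", "MEASURE", "X", "Z"],
    ["X", "Z", "H", "CNOT", "MEASURE"],
]

# Stage 2: mine frequent, compact subroutines (contiguous n-grams).
def mine_gadgets(episodes, min_len=2, max_len=3, min_support=2):
    counts = Counter()
    for ep in episodes:
        for n in range(min_len, max_len + 1):
            for i in range(len(ep) - n + 1):
                counts[tuple(ep[i:i + n])] += 1
    return [g for g, c in counts.items() if c >= min_support]

gadgets = mine_gadgets(episodes)

# Stage 3: group gadgets with DBSCAN on a simple bag-of-actions feature vector.
vocab = sorted({a for ep in episodes for a in ep})
features = np.array([[g.count(a) for a in vocab] for g in gadgets])
cluster_labels = DBSCAN(eps=1.0, min_samples=1).fit_predict(features)
for gadget, label in zip(gadgets, cluster_labels):
    print(label, gadget)
```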
This paper presents a methodology for integrating machine learning techniques into metaheuristics for solving combinatorial optimization problems. Namely, we propose a general machine learning framework for neighbor generation in metaheuristic search. We first define an efficient neighborhood structure constructed by applying a transformation to a selected subset of variables from the current solution. Then, the key of the proposed methodology is to generate promising neighbors by selecting a proper subset of variables that contains a descent of the objective in the solution space. To learn a good variable selection strategy, we formulate the problem as a classification task that exploits structural information from the characteristics of the problem and from high-quality solutions. We validate our methodology on two metaheuristic applications: a Tabu Search scheme for solving a Wireless Network Optimization problem and a Large Neighborhood Search heuristic for solving Mixed-Integer Programs. The experimental results show that our approach is able to achieve a satisfactory trade-off between the exploration of a larger solution space and the exploitation of high-quality solution regions on both applications.
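A minimal sketch of the learned neighbor-generation step, assuming a Large Neighborhood Search-style setting: a classifier trained offline on per-variable features (synthetic placeholders here) scores the incumbent's variables, and the top-scored subset is selected for re-optimization. The features, labels, and subset size are assumptions for illustration.

```python
# Hedged sketch: learn which variables to include in the neighborhood by
# scoring them with a classifier; illustrative features and labels only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Offline phase: per-variable features (e.g. reduced cost, constraint slack,
# distance to bound) labeled 1 if changing that variable led to an improving
# neighbor in previously collected high-quality solutions (synthetic here).
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] + rng.normal(0, 0.3, 500) > 0).astype(int)
clf = LogisticRegression().fit(X_train, y_train)

def generate_neighbor_subset(variable_features, k=5):
    """Select the k variables most likely to yield an objective descent."""
    scores = clf.predict_proba(variable_features)[:, 1]
    return np.argsort(scores)[::-1][:k]

# Online phase: score the incumbent's variables and pick the subset to relax.
incumbent_features = rng.normal(size=(20, 3))
print("variables to re-optimize:", generate_neighbor_subset(incumbent_features))
```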